Clip floating point constants to bf16 range to avoid inf conversion #20605
Merged
sgugger merged 1 commit into huggingface:main from sangeethabal:add_b16_support_to_avoid_nans on Dec 6, 2022
Conversation
The documentation is not available anymore as the PR was closed or merged.
sgugger (Collaborator) reviewed on Dec 6, 2022:
Thanks for opening a clean PR. I still have the same comment :-)
Also make sure you run make style on your branch to pass the quality tests on your PR.
src/transformers/modeling_utils.py (outdated), comment on lines 189 to 191:
As said before, we have a constant ENV_VARS_TRUE_VALUES in utils that you should reuse here, to catch any variation of how the user sets this environment variable. You should then test:
os.environ.get(xxx, "0").upper() in ENV_VARS_TRUE_VALUES
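A minimal sketch of that suggestion, assuming the check reads the XLA_USE_BF16 variable mentioned in this PR's description (the exact placement in modeling_utils.py is not shown in the review):

```python
import os

# ENV_VARS_TRUE_VALUES is defined in transformers.utils and contains the
# accepted truthy spellings: {"1", "ON", "YES", "TRUE"}.
from transformers.utils import ENV_VARS_TRUE_VALUES

# Default to "0" and upper-case the value so that "true", "True", "yes",
# etc. are all recognized as enabling the flag.
use_bf16 = os.environ.get("XLA_USE_BF16", "0").upper() in ENV_VARS_TRUE_VALUES
```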
mpierrau pushed a commit to mpierrau/transformers that referenced this pull request on Dec 15, 2022: Clip floating point constants to bf16 range to avoid inf conversion (huggingface#20605) (Co-authored-by: EC2 Default User <ec2-user@ip-172-31-40-169.us-west-2.compute.internal>)
When running the HuggingFace BERT (any size) fine-tuning tutorial with transformers version >= 4.21.0 and XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1 set, I see NaNs in the loss after the first step.
What does this PR do?
This PR addresses an issue where the model code passes a value that is out of range for bfloat16 when XLA_USE_BF16=1 or XLA_DOWNCAST_BF16=1 is set, so the conversion casts it to -inf.
The NaNs likely come from a transformers library change, #17306, which replaced many lines that used to use -float("inf") (or other large negative constants) with torch.finfo(dtype).min. For torch.float32 the minimum value is -3.4028234663852886e+38, which is smaller than the bfloat16 minimum of -3.3895313892515355e+38, so torch.finfo(torch.float32).min gets converted to -inf under bfloat16. When the original encoder_extended_attention_mask is 1, encoder_extended_attention_mask becomes (1.0 - 1.0) * -inf, which is NaN (via the IEEE rule 0.0 * inf = NaN).
This PR ensures the constant is torch.finfo(torch.bfloat16).min = -3.3895313892515355e+38 rather than -inf, so the results no longer contain NaNs.
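A minimal repro of that mechanism in plain PyTorch on CPU, with an explicit .to(torch.bfloat16) standing in for the XLA downcast:

```python
import torch

# float32's minimum is more negative than the largest-magnitude finite
# bfloat16 value, so the downcast overflows to -inf.
mask_value = torch.tensor(torch.finfo(torch.float32).min).to(torch.bfloat16)
print(mask_value)  # tensor(-inf, dtype=torch.bfloat16)

# Where the attention mask is 1, the additive mask is (1 - 1) * -inf = NaN.
attention_mask = torch.tensor(1.0, dtype=torch.bfloat16)
print((1.0 - attention_mask) * mask_value)  # tensor(nan, dtype=torch.bfloat16)

# Using the bfloat16 minimum instead keeps the value finite, so the
# product is a clean 0 where the mask is 1.
safe_value = torch.tensor(torch.finfo(torch.bfloat16).min, dtype=torch.bfloat16)
print((1.0 - attention_mask) * safe_value)  # tensor(0., dtype=torch.bfloat16)
```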
The following lines check for the XLA_USE_BF16 or XLA_DOWNCAST_BF16 environment variable and set the dtype accordingly:
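A sketch of what such a check could look like; the helper name get_mask_dtype is hypothetical, and only the environment variable names and the clipping behavior come from this PR's description:

```python
import os

import torch

from transformers.utils import ENV_VARS_TRUE_VALUES


def get_mask_dtype(dtype: torch.dtype) -> torch.dtype:
    # Hypothetical helper: return the dtype whose finfo.min stays finite
    # once torch_xla downcasts float32 tensors to bfloat16.
    if os.environ.get("XLA_USE_BF16", "0").upper() in ENV_VARS_TRUE_VALUES:
        return torch.bfloat16
    if (
        os.environ.get("XLA_DOWNCAST_BF16", "0").upper() in ENV_VARS_TRUE_VALUES
        and dtype == torch.float32
    ):
        return torch.bfloat16
    return dtype


# Mask constants built as torch.finfo(get_mask_dtype(torch.float32)).min
# remain finite after the XLA downcast instead of overflowing to -inf.
```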
Referencing related issues: aws-neuron/aws-neuron-sdk#593 and pytorch/xla#4152
Fixes # (issue)
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger